parameter initializers
- default: initialize parameters as in the original paper
- normal: initialize parameters from a normal distribution
- uniform: initialize parameters from a uniform distribution
- xavier_normal: initialize parameters with Xavier initialization using a normal distribution
- xavier_uniform: initialize parameters with Xavier initialization using a uniform distribution
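As a rough illustration, these options might map onto samplers like the following NumPy sketch; the function name and the std/range constants used for `normal` and `uniform` are assumptions, not the library's actual values:

```python
import numpy as np

def init_params(shape, method="default", rng=None):
    """Hypothetical sketch: map initializer names to NumPy sampling."""
    if rng is None:
        rng = np.random.default_rng(0)
    fan_in, fan_out = shape
    if method == "normal":
        return rng.normal(0.0, 0.01, size=shape)        # std is an assumption
    if method == "uniform":
        return rng.uniform(-0.05, 0.05, size=shape)     # range is an assumption
    if method == "xavier_normal":
        std = np.sqrt(2.0 / (fan_in + fan_out))
        return rng.normal(0.0, std, size=shape)
    if method == "xavier_uniform":
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, size=shape)
    # "default": the paper-specific scheme would live in each model's code
    return rng.normal(0.0, 0.01, size=shape)
```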
optimization method used to train the algorithms
- default: the optimizer used in the original paper
- sgd
- adam
- adagrad
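The choice can be pictured as a small factory that returns an update rule; this is an illustrative NumPy implementation of the three named optimizers, not the library's code:

```python
import numpy as np

def make_optimizer(name, lr=0.01):
    """Hypothetical sketch: return a step(w, grad, state) update rule."""
    if name == "sgd":
        def step(w, g, state):
            return w - lr * g, state
    elif name == "adagrad":
        def step(w, g, state):
            acc = state.get("acc", np.zeros_like(w)) + g * g
            state["acc"] = acc
            return w - lr * g / (np.sqrt(acc) + 1e-10), state
    elif name == "adam":
        def step(w, g, state, b1=0.9, b2=0.999, eps=1e-8):
            t = state.get("t", 0) + 1
            m = b1 * state.get("m", np.zeros_like(w)) + (1 - b1) * g
            v = b2 * state.get("v", np.zeros_like(w)) + (1 - b2) * g * g
            state.update(t=t, m=m, v=v)
            mhat, vhat = m / (1 - b1 ** t), v / (1 - b2 ** t)
            return w - lr * mhat / (np.sqrt(vhat) + eps), state
    else:
        # "default" would dispatch to whatever the original paper used
        raise ValueError(f"unknown optimizer: {name}")
    return step
```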
whether to activate the early-stopping mechanism
- true
- false
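A typical early-stopping rule watches a validation metric and stops once it plateaus; a minimal sketch, where the patience and improvement threshold are assumed values rather than the library's defaults:

```python
def early_stop(history, patience=5, min_delta=1e-4):
    """Hypothetical sketch: return True when the validation metric has not
    improved by at least min_delta over the last `patience` epochs."""
    if len(history) <= patience:
        return False
    best_before = max(history[:-patience])
    recent_best = max(history[-patience:])
    return recent_best < best_before + min_delta
```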
whether to tune hyperparameters directly on the test set; the default value is false
- true
- false
the dimension of latent factors (embeddings)
the coefficient of L1 regularization
the coefficient of L2 regularization
dropout rate
learning rate
training epochs
batch size for training
number of layers for MLP
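Putting the generic options together, a run configuration might look like the dictionary below; every key name and value here is illustrative, not necessarily the library's exact option name or default:

```python
# Hypothetical example configuration; key names are illustrative only.
config = {
    "init_method": "xavier_uniform",  # parameter initializer
    "optimizer": "adam",              # training optimizer
    "early_stop": True,               # early-stopping switch
    "factors": 64,                    # dimension of latent embeddings
    "reg_1": 0.0,                     # L1 regularization coefficient
    "reg_2": 1e-4,                    # L2 regularization coefficient
    "dropout": 0.2,                   # dropout rate
    "lr": 1e-3,                       # learning rate
    "epochs": 50,                     # training epochs
    "batch_size": 256,                # training batch size
    "num_layers": 3,                  # number of MLP layers
}
```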
the constant that multiplies the penalty terms for SLIM
the ElasticNet mixing parameter for SLIM, in the range (0, 1)
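In the sklearn-style parameterization of ElasticNet, these two SLIM knobs enter the per-item objective as a weighted mix of L1 and L2 penalties; a sketch of just the penalty term (assuming that parameterization):

```python
import numpy as np

def elasticnet_penalty(w, alpha, l1_ratio):
    """Sketch of how SLIM's two parameters combine (sklearn-style):
        alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||_2^2
    alpha is the multiplying constant; l1_ratio the mixing parameter."""
    l1 = np.sum(np.abs(w))
    l2 = np.sum(w ** 2)
    return alpha * l1_ratio * l1 + 0.5 * alpha * (1.0 - l1_ratio) * l2
```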
the number of top-n popular candidate items preselected to reduce time complexity for MostPop
the number of neighbors to take into account for ItemKNN
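The neighbor count can be illustrated with a cosine-similarity sketch that keeps only the k most similar items per item; the similarity measure here is an assumption, as ItemKNN implementations may use other measures:

```python
import numpy as np

def topk_item_neighbors(R, k):
    """Hypothetical sketch: cosine similarity between the item columns of a
    user-item matrix R, keeping only the k nearest neighbors per item."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0                 # avoid division by zero
    X = R / norms
    S = X.T @ X                             # item-item cosine similarity
    np.fill_diagonal(S, 0.0)                # an item is not its own neighbor
    keep = np.argsort(-S, axis=1)[:, :k]    # indices of top-k per row
    mask = np.zeros_like(S, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return np.where(mask, S, 0.0)
```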
node dropout ratio for NGCF
message dropout ratio for NGCF
the coefficient of KL regularization for Multi-VAE
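In Multi-VAE this coefficient weights the KL divergence between the diagonal-Gaussian posterior and the standard-normal prior in the training objective; a minimal sketch of that term (the function name and default weight are assumptions):

```python
import numpy as np

def kl_term(mu, logvar, beta=0.2):
    """Sketch: beta-weighted KL(q(z|x) || N(0, I)) for a diagonal Gaussian
    posterior with mean mu and log-variance logvar, averaged over the batch."""
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar), axis=-1)
    return beta * np.mean(kl)
```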